hls4ml: A Flexible, Open-Source Platform for Deep Learning Acceleration on Reconfigurable Hardware
Schulte, Jan-Frederik, Ramhorst, Benjamin, Sun, Chang, Mitrevski, Jovan, Ghielmetti, Nicolò, Lupi, Enrico, Danopoulos, Dimitrios, Loncar, Vladimir, Duarte, Javier, Burnette, David, Laatu, Lauri, Tzelepis, Stylianos, Axiotis, Konstantinos, Berthet, Quentin, Wang, Haoyan, White, Paul, Demirsoy, Suleyman, Colombo, Marco, Aarrestad, Thea, Summers, Sioni, Pierini, Maurizio, Di Guglielmo, Giuseppe, Ngadiuba, Jennifer, Campos, Javier, Hawks, Ben, Gandrakota, Abhijith, Fahim, Farah, Tran, Nhan, Constantinides, George, Que, Zhiqiang, Luk, Wayne, Tapper, Alexander, Hoang, Duc, Paladino, Noah, Harris, Philip, Lai, Bo-Cheng, Valentin, Manuel, Forelli, Ryan, Ogrenci, Seda, Gerlach, Lino, Flynn, Rian, Liu, Mia, Diaz, Daniel, Khoda, Elham, Quinnan, Melissa, Solares, Russell, Parajuli, Santosh, Neubauer, Mark, Herwig, Christian, Tsoi, Ho Fung, Rankin, Dylan, Hsu, Shih-Chieh, Hauck, Scott
We present hls4ml, a free and open-source platform that translates machine learning (ML) models from modern deep learning frameworks into high-level synthesis (HLS) code that can be integrated into full designs for field-programmable gate arrays (FPGAs) or application-specific integrated circuits (ASICs). With its flexible and modular design, hls4ml supports a large number of deep learning frameworks and can target HLS compilers from several vendors, including Vitis HLS, Intel oneAPI, and Catapult HLS. Together with a wider ecosystem for software-hardware co-design, hls4ml has enabled the acceleration of ML inference in a wide range of commercial and scientific applications where low latency, resource usage, and power consumption are critical. In this paper, we describe the structure and functionality of the hls4ml platform. The overarching design considerations for the generated HLS code are discussed, together with selected performance results.
Dropping the D: RGB-D SLAM Without the Depth Sensor
Kiray, Mert, Karaomer, Alican, Busam, Benjamin
We present DropD-SLAM, a real-time monocular SLAM system that achieves RGB-D-level accuracy without relying on depth sensors. The system replaces active depth input with three pretrained vision modules: a monocular metric depth estimator, a learned keypoint detector, and an instance segmentation network. Dynamic objects are suppressed using dilated instance masks, while static keypoints are assigned predicted depth values and backprojected into 3D to form metrically scaled features. These are processed by an unmodified RGB-D SLAM back end for tracking and mapping. On the TUM RGB-D benchmark, DropD-SLAM attains 7.4 cm mean ATE on static sequences and 1.8 cm on dynamic sequences, matching or surpassing state-of-the-art RGB-D methods while operating at 22 FPS on a single GPU. These results suggest that modern pretrained vision models can replace active depth sensors as reliable, real-time sources of metric scale, marking a step toward simpler and more cost-effective SLAM systems.
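The backprojection step the abstract describes, assigning each static keypoint a predicted metric depth and lifting it into 3D, is the standard pinhole-camera operation. A minimal sketch, with illustrative intrinsics (the specific values here are assumptions, not taken from the paper):

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Backproject a pixel (u, v) with metric depth into a 3D point in the
    camera frame using the pinhole model:
        X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth."""
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Example with TUM-style intrinsics (illustrative values only):
p = backproject(400.0, 300.0, 2.0, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```

Because the predicted depth is metric, the resulting 3D features carry absolute scale, which is what lets an unmodified RGB-D back end consume them in place of sensor depth.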
DesCartes Builder: A Tool to Develop Machine-Learning Based Digital Twins
de Conto, Eduardo, Genest, Blaise, Easwaran, Arvind, Ng, Nicholas, Menon, Shweta
Digital twins (DTs) are increasingly utilized to monitor, manage, and optimize complex systems across various domains, including civil engineering. A core requirement for an effective DT is to act as a fast, accurate, and maintainable surrogate of its physical counterpart, the physical twin (PT). To this end, machine learning (ML) is frequently employed to (i) construct real-time DT prototypes using efficient reduced-order models (ROMs) derived from high-fidelity simulations of the PT's nominal behavior, and (ii) specialize these prototypes into DT instances by leveraging historical sensor data from the target PT. Despite the broad applicability of ML, its use in DT engineering remains largely ad hoc. Indeed, while conventional ML pipelines often train a single model for a specific task, DTs typically require multiple, task- and domain-dependent models. Thus, a more structured approach is required to design DTs. In this paper, we introduce DesCartes Builder, an open-source tool to enable the systematic engineering of ML-based pipelines for real-time DT prototypes and DT instances. The tool leverages an open and flexible visual data flow paradigm to facilitate the specification, composition, and reuse of ML models. It also integrates a library of parameterizable core operations and ML algorithms tailored for DT design. We demonstrate the effectiveness and usability of DesCartes Builder through a civil engineering use case involving the design of a real-time DT prototype to predict the plastic strain of a structure.
Turning point for artificial intelligence: Will the large cloud providers dominate?
Artificial intelligence and machine learning require huge amounts of processing capacity and data storage, making the cloud the preferred option. That raises the specter of a few cloud giants dominating AI applications and platforms. Could the tech giants take control of the AI narrative and reduce choices for enterprises? Not necessarily, but with some caveats, AI experts emphasize. The large cloud providers are, however, definitely in a position to control the AI narrative from several perspectives.
How Kubernetes will Benefit Artificial Intelligence Development
Despite economic anxiety brought on by the 2020 pandemic, investors chose to put their money into artificial intelligence development at a higher rate than in the previous year. Overall, 2020 investment in AI startups exceeded 40 billion US dollars, with an increase of 9.3% from 2019 numbers, according to the Artificial Intelligence Index Report 2021 by Stanford University. Rapid growth required rapid changes in AI development, deployment, staging, and integration with other platforms. Organizations turned to containerized applications orchestrated by the Kubernetes platform to scale efficiently and move data across machines. Google's open-sourced Kubernetes platform provides software engineers and developers the opportunity to launch containerized applications in a managed environment across the application's life cycle.
AI Compilers and the Race to the Bottom
Creating intelligence requires a lot of data. And all of that data needs technologies that can support it. In the case of artificial intelligence (AI), these technologies include large amounts of direct-access, high-speed memory; parallel computing architectures that are capable of processing different parts of the same dataset simultaneously; and, somewhat surprisingly, lower-precision computing than many other applications. An almost endless supply of this technology mix is available in the data center. AI development tools were therefore designed for the data center infrastructure behind applications like internet queries, voice search, and online facial recognition.
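The claim that AI tolerates lower-precision computing than many other applications is easy to demonstrate: halving the bit width of a weight matrix halves its memory footprint and bandwidth needs, while the rounding error stays small relative to typical weight magnitudes. A quick sketch (matrix size chosen arbitrarily for illustration):

```python
import numpy as np

# A weight matrix of the kind found in a neural-network layer.
weights32 = np.random.randn(1024, 1024).astype(np.float32)
weights16 = weights32.astype(np.float16)  # half precision

print(weights32.nbytes)  # → 4194304 (4 MiB)
print(weights16.nbytes)  # → 2097152 (2 MiB)

# The worst-case rounding error introduced by the conversion:
max_err = np.max(np.abs(weights32 - weights16.astype(np.float32)))
```

This is why modern accelerators devote silicon to float16, bfloat16, and integer arithmetic: for inference workloads, the lost precision rarely affects accuracy, but the doubled throughput per byte always helps.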
Python vs. JavaScript for AI: Which one should you choose?
The use of artificial intelligence (AI) is growing at an exponential rate. Businesses are using AI to leverage benefits such as lower costs, increased productivity, and reduced manual errors. Those benefits are so palpable that today, 30% of all companies worldwide are using AI for at least one of their sales processes. But it's also natural to ask yourself which language you should choose for programming AI algorithms. After a little digging, you'll surely find that Python and JavaScript are two top contenders.
Beware of implementing machine learning models as black box tools
In data science and machine learning, mathematical skills are as important as programming skills. There are many good packages for building predictive models, and thanks to them, pretty much anyone with a basic understanding of data science can build a machine learning model. However, before using these packages it is important to master the fundamentals of data science, so that you are not using them simply as black-box tools. In this article, we illustrate how in-depth knowledge of what happens behind a machine learning algorithm is crucial for building efficient and reliable models.
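As one concrete example of looking behind the black box: a library's linear-regression `fit()` call is, conceptually, just the closed-form normal-equation solution. A minimal sketch on noise-free synthetic data (the data and coefficients here are invented for illustration):

```python
import numpy as np

# Synthetic data: y = 1 + 2*x with no noise, so the closed-form
# solution should recover the coefficients exactly.
x = np.linspace(0.0, 10.0, 50)
X = np.column_stack([np.ones_like(x), x])  # design matrix with a bias column
y = 1.0 + 2.0 * x

# Normal equation: solve (X^T X) beta = X^T y for beta. This is,
# conceptually, what a library's LinearRegression.fit() computes.
beta = np.linalg.solve(X.T @ X, X.T @ y)
print(beta)  # → approximately [1., 2.]
```

Knowing this derivation tells you things the package API does not: why collinear features make the fit unstable (X^T X becomes near-singular), and why adding a ridge penalty fixes it.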
Key factors driving AI in 2020 - TechHQ
The Artificial Intelligence (AI) that we first knew in the Science Fiction (Sci-Fi) movies of the 1980s was portrayed as fanciful magic, where computers would talk to us like humans and be able to understand our needs, hopes and perhaps even our emotional desires. The trouble, a quarter-century ago, was that the IT industry was conceptually capable of building the logic constructs and computation engines that would deliver AI, but even the smartest techies were held back by several factors, not all of which were their fault. Today's AI has changed, first because the developers building it have produced vastly more sophisticated algorithms than those driving the initial forays into this field. Secondly and crucially, our new AI systems have also benefitted from access to massively widened datasets that were never available before the birth of the Internet and cloud data centers. Thirdly, computers have quite simply become more powerful: faster at processing (some boosted by the additional charge offered by Graphics Processing Unit technology), bigger in terms of data storage capacity, and more intelligently internetworked into clusters of computing power across distributed networks.
LIDA: Lightweight Interactive Dialogue Annotator
Collins, Edward, Rozanov, Nikolai, Zhang, Bingbing
Dialogue systems have the potential to change how people interact with machines but are highly dependent on the quality of the data used to train them. It is therefore important to develop good dialogue annotation tools which can improve the speed and quality of dialogue data annotation. With this in mind, we introduce LIDA, an annotation tool designed specifically for conversation data. As far as we know, LIDA is the first dialogue annotation system that handles the entire dialogue annotation pipeline from raw text, as may be the output of transcription services, to structured conversation data. Furthermore, it supports the integration of arbitrary machine learning models as annotation recommenders and also has a dedicated interface to resolve inter-annotator disagreements, such as after crowdsourcing annotations for a dataset. LIDA is fully open source, documented, and publicly available at https://github.com/Wluper/lida.